My Cheap AWS setup to host Immich files on S3
I hosted Immich on my Raspberry Pi, but I was running out of storage. Since Immich still does not support S3 natively, I wanted to explore non-conventional options to host the service. In this article, I will share my setup and tricks to keep costs as low as possible.
I used s3backer with NBD support to store Immich files on S3 while keeping a filesystem-compatible interface, and Spot Instances to lower compute costs.
I initially attempted to use IPv6-only networking, but at the time of writing it remains infeasible. You can read the details in my other article, where I cover all the issues I encountered; that was the start of a journey into hosting and using IPv6-only services on AWS, and a whole new world of limitations and unexpected issues.
1. Use TailScale to keep your instance private and avoid using a NAT Gateway #
I use TailScale on my home devices to avoid exposing ports and keep everything private (perhaps this will be another article, as I’ve found bliss in using my own domain name, instead of TailScale’s internal one).
You might use the default VPC and make your instance public, without an Elastic IP, although I prefer to create my own custom VPC (and I think you should too). Create a custom security group and allow only outbound traffic. Don’t worry about SSH access: the AWS Systems Manager Agent and TailScale will handle shell access through the console.
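If you prefer the CLI, here is a minimal sketch; the VPC ID is a placeholder, and a freshly created security group has no inbound rules and allows all outbound traffic by default:
# Placeholder VPC ID; adjust the name and description to taste
aws ec2 create-security-group \
    --group-name immich-outbound-only \
    --description "Outbound-only group for the Immich host" \
    --vpc-id vpc-0123456789abcdef0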
Once you install and set up TailScale, don’t forget to add --ssh to your “tailscale up” command to enable SSHing into your instance without opening additional ports in your security group. With this setup, even if your instance is public, it will remain inaccessible from the Internet.
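For reference, the relevant commands are roughly these (the install one-liner is Tailscale’s official convenience script):
# Install Tailscale
curl -fsSL https://tailscale.com/install.sh | sh
# Join your tailnet with Tailscale SSH enabled
sudo tailscale up --ssh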
2. Use Spot Instances #
Spot Instances are cheap, and for a home setup they can be more reliable than your internet connection (or, if you are extremely “lucky” like me, more reliable than our local energy provider, which causes micro-blackouts at least three times a week).
Currently, it is sufficient to launch a single instance from the launch wizard and enable the Spot request option. I used a t4g.medium instance. Try to set a reasonable maximum price, as the Spot price fluctuates: once AWS hibernates your instance, it can only be resumed when the Spot price falls below what you set.
Example: if the current Spot price is $0.02, set your maximum price to at least $0.05, so you won’t encounter frequent interruptions and you still end up paying less than the on-demand price.
To keep things simple, we will use hibernation, so there’s no need for complicated setups or an external database (but keep in mind that externalizing the database could help a lot; stay tuned for the next episodes, where we will try Aurora Serverless and Spot Fleets to maximize uptime). Hibernating your instance also prevents issues with s3backer lockfiles and mount tokens, which can occur if your instance stops and the filesystem isn’t unmounted cleanly.
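For reference, a hedged CLI sketch of such a launch; the AMI and subnet IDs are placeholders, and hibernation additionally requires an encrypted EBS root volume:
# IDs below are placeholders; pick an arm64 Ubuntu AMI for t4g instances
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t4g.medium \
    --subnet-id subnet-0123456789abcdef0 \
    --hibernation-options Configured=true \
    --instance-market-options 'MarketType=spot,SpotOptions={MaxPrice=0.05,SpotInstanceType=persistent,InstanceInterruptionBehavior=hibernate}'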
3. Set up your instance role #
Luckily for us, s3backer can use the instance profile, so you only have to add the permissions to the instance profile’s role. I attached the following policy, so it will be able to operate only on its assigned bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "s3-object-lambda:*"
            ],
            "Resource": [
                "arn:aws:s3:::yourbucket",
                "arn:aws:s3:::yourbucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:List*",
                "s3:Get*"
            ],
            "Resource": "*"
        }
    ]
}
Take note of the instance profile you create; it will be used in the config file later. Also, attach the AWS-managed AmazonSSMManagedInstanceCore policy to the role, so you can access your new server through AWS Systems Manager Session Manager without exposing SSH.
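If you script this, it could look roughly like the following; I’m assuming the role already exists with an EC2 trust policy, and I’m reusing the name AWSServer-role that shows up later in the s3backer config:
# Attach the bucket policy above (saved locally as s3backer-policy.json)
aws iam put-role-policy --role-name AWSServer-role \
    --policy-name s3backer-bucket-access \
    --policy-document file://s3backer-policy.json
# Allow Session Manager access without opening SSH
aws iam attach-role-policy --role-name AWSServer-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
# Expose the role through an instance profile
aws iam create-instance-profile --instance-profile-name AWSServer-role
aws iam add-role-to-instance-profile --instance-profile-name AWSServer-role \
    --role-name AWSServer-role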
4. Set up s3backer #
4.1 Install the dependencies #
Install the packages listed at https://github.com/archiecobbs/s3backer/wiki/Build-and-Install#building-and-installing-from-distribution-source
sudo apt-get install libcurl4-openssl-dev libfuse-dev libexpat1-dev libssl-dev zlib1g-dev pkg-config
Then install the additional dependencies needed to build on a vanilla Ubuntu:
sudo apt install build-essential \
    autoconf \
    automake \
    libtool \
    libtool-bin \
    libfuse3-dev \
    nbdkit \
    nbdkit-plugin-dev \
    nbd-client
Finally, rebuild, configure, and install it:
./rebuild.sh
./configure --enable-nbd
make
sudo make install
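If the build succeeded, the binary should now be on your PATH and respond to a version check:
which s3backer
s3backer --version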
4.2 Configure s3backer and fstab #
Follow the guide at https://github.com/archiecobbs/s3backer/wiki/Configuring-fstab
I don’t like FUSE, and I am relying on S3 server-side encryption, so here’s my config file:
# s3backer config
# Geometry
--size=500g
--blockSize=256k
# Credentials
# Acquire S3 credentials from EC2 machine via IAM role
# Use IMDSv2 for the metadata service
--accessEC2IAM=AWSServer-role
--accessEC2IAM-IMDSv2
# List blocks on startup
--listBlocks
--listBlocksThreads=50
# Encryption
--ssl
#--encrypt
#--passwordFile=/etc/s3b-passwd
# Block cache
--blockCacheSize=20000
--blockCacheFile=/opt/s3backer/cachefile
--blockCacheWriteDelay=15000
--blockCacheThreads=4
--blockCacheRecoverDirtyBlocks
--blockCacheNumProtected=1000
# Region
--region=eu-north-1
# Misc
--timeout=90
#--force
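If you haven’t created the bucket yet, any machine with suitable credentials can do it; just make sure the region matches the one in the config above:
aws s3 mb s3://yourbucketname --region eu-north-1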
We are now ready to try it:
s3backer --nbd --configFile=/etc/s3-backer-options.conf yourbucketname /dev/nbd0
It can be useful to set a hostname different from the AWS default one; otherwise, the s3backer lockfile changes whenever the instance is started from an AMI image. A stable hostname will also become useful later for the Spot Fleet setup, and if something goes wrong you can download the lockfile and check that everything looks good. Follow the instructions here to make the hostname persistent: https://repost.aws/knowledge-center/linux-static-hostname
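In short, the idea is something like this (the hostname is a placeholder; the drop-in file keeps cloud-init from resetting it on reboot):
sudo hostnamectl set-hostname immich-s3
echo 'preserve_hostname: true' | sudo tee /etc/cloud/cloud.cfg.d/99-preserve-hostname.cfg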
Back to s3backer: if everything goes as expected, you should see output like this:
2025-11-02 17:09:05 INFO: reading meta-data from cache file "/opt/s3backer/cachefile"
2025-11-02 17:09:05 INFO: loaded cache file "/opt/s3backer/cachefile" with 20000 free and 0 used blocks (max index 0)
2025-11-02 17:09:05 INFO: established new mount token 0x6855ebc8
s3backer: connecting yourbucket to /dev/nbd0 and daemonizing
We can now format our device and then mount it!
mkfs.ext4 /dev/nbd0
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 131072000 4k blocks and 32768000 inodes
Filesystem UUID: 0019b91e-9aea-408b-bd91-214bb11c648a
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
root@i-01e84b18754cd50ee:/opt/s3backer# mkdir /data
root@i-01e84b18754cd50ee:/opt/s3backer# mount /dev/nbd0 /data/
root@i-01e84b18754cd50ee:/opt/s3backer# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 19G 6.8G 12G 37% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 766M 1.1M 765M 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
efivarfs 128K 3.7K 125K 3% /sys/firmware/efi/efivars
/dev/nvme0n1p16 891M 116M 713M 14% /boot
/dev/nvme0n1p15 98M 6.4M 92M 7% /boot/efi
tmpfs 383M 12K 383M 1% /run/user/1000
/dev/nbd0 492G 28K 467G 1% /data
root@i-01e84b18754cd50ee:/opt/s3backer# touch /data/test
root@i-01e84b18754cd50ee:/opt/s3backer# rm /data/test
We are ready to finalize our configuration. As the guide says, let’s create a systemd unit file:
# /lib/systemd/system/s3backer-nbd.service
# systemd service file for running s3backer in NBD mode
[Unit]
Description=s3backer running in NBD mode
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/archiecobbs/s3backer
[Install]
WantedBy=multi-user.target
[Service]
Type=forking
ExecStart=/usr/bin/s3backer --nbd --configFile=/etc/s3-backer-options.conf yourbucket /dev/nbd0
# Security hardening
ProtectSystem=full
#ProtectHome=read-only
ProtectHostname=true
ProtectClock=true
ProtectKernelTunables=true
ProtectKernelLogs=true
ProtectControlGroups=true
RestrictRealtime=true
Once edited, issue
$ systemctl daemon-reload
$ systemctl enable s3backer-nbd.service
and configure /etc/fstab to mount at boot
/dev/nbd0 /data ext4 _netdev,discard,noatime,nodiratime,x-systemd.requires=s3backer-nbd.service 0 0
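Before rebooting, it’s worth re-mounting through fstab to confirm the entry is correct:
sudo umount /data
sudo mount /data
findmnt /data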
Then, to check that everything’s fine, issue a reboot command! We don’t want anything to go wrong with our photos.
Now it’s time to create an AMI, so that our base image will be available for other use cases.
5. Install Immich #
It is now time to install and configure Immich. The only thing we have to take care of is using our new s3backer volume to host the image files (don’t use it for the database).
5.1 Install Docker #
Follow the instructions on Docker’s site. If you use Ubuntu: https://docs.docker.com/engine/install/ubuntu/
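On a fresh instance, Docker’s convenience script is another quick option (this is the upstream get.docker.com script; review it before running):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh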
5.2 Configure Immich #
The only modification we have to make to the Immich config file is to point the upload location at the s3backer volume we defined. I used /data/immich/library:
# The location where your uploaded files are stored
UPLOAD_LOCATION=/data/immich/library
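Since the s3backer volume starts out empty, create the upload directory before starting the containers. A minimal sketch, assuming Immich lives under /opt/immich (matching the paths used in the backup section below) and using the compose files from the official releases:
mkdir -p /data/immich/library
mkdir -p /opt/immich && cd /opt/immich
wget -O docker-compose.yml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env
# then set UPLOAD_LOCATION=/data/immich/library in .env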
Don’t forget to add your user to the docker group (then log out and log back in), and check that everything works as expected:
sudo usermod -aG docker yourusername
groups
#check that the user is in the docker group
docker compose up -d
docker compose logs -f
Are we done yet? Obviously not!
6. Back up!! #
We need a good backup solution! We could leverage AWS Backup for our EC2 instance, but it is still better to save our photos in another S3 bucket, since data corruption can happen. We also want a complete database dump: since we didn’t externalize the database, it is running locally on our EC2 instance, and AWS Backup does not take application-aware backups.
I like restic and resticprofile, so I’ll leave some tips here, but my advice is to find the tooling that works for you. restic supports S3 and encryption, and it is a robust solution. I will write an article about my configuration in the future, as it also features failure notifications and health checks.
6.1 resticprofile sample configuration #
Don’t forget to enable systemd login lingering for your user; otherwise, your user’s scheduled backup jobs (systemd user timers) will only run while you are logged in:
loginctl enable-linger yourusername
awsphoto:
  repository: "s3:https://s3.amazonaws.com/yours3backupbucket"
  password-file: "passwords3.txt"

  backup:
    # verbose: true
    # check-before: true
    exclude-caches: true
    exclude:
      - /opt/immich/postgres/
    source:
      - "/data/immich"
    run-before: docker exec -t immich_postgres pg_dumpall --clean --if-exists --username=postgres > /data/immich/immich-database.sql
    # run-finally: sudo umount /backup

  # check:
  #   schedule: "*-*-01 03:00"

  retention:
    before-backup: false
    after-backup: true
    keep-daily: 7
    keep-weekly: 4
    keep-monthly: 12
    keep-yearly: 75
    prune: true
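With the profile in place (resticprofile looks for profiles.yaml in the current directory by default), a manual run looks something like this:
# One-off backup run for the awsphoto profile
resticprofile --name awsphoto backup
# Verify that the snapshots landed in the backup bucket
resticprofile --name awsphoto snapshots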
That’s all for now! As I said, this setup is the cheapest possible, but please note that you might encounter issues in the future. I’m still experimenting and testing, and hoping that a native solution will be implemented in Immich. In the next episode, we’ll try to increase availability.
Where to go next #
What you can experiment with next:
- Get notified when your instance is hibernated or resumes
- Explore other services that can run on your EC2 instance
- Get an SSL certificate for your Immich server on your TailNet (spoiler: I used Caddy)
- Tell me how it goes, and if I forgot something